Case-Based Learning of Applicability Conditions for Stochastic Explanations

Authors

  • Giulio Finestrali
  • Hector Muñoz-Avila
Abstract

This paper studies the problem of explaining events in stochastic environments. We explore three ideas to address this problem: (1) using the notion of a Stochastic Explanation, which associates with any event a probability distribution over plausible explanations for that event; (2) retaining (event, stochastic explanation) pairs as cases when unprecedented events occur; (3) learning the probability distribution in the stochastic explanation as cases are reused. We claim that a system using stochastic explanations reacts faster to abrupt changes in the environment than a system using deterministic explanations. We demonstrate this claim in a CBR system incorporating the three ideas above while playing a real-time strategy game, and observe that the CBR system reacts faster to abrupt environmental changes when using stochastic explanations than when using deterministic ones.
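The three ideas in the abstract can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' implementation: the event names, the `StochasticExplanationCase` class, and the count-based update rule are all assumptions chosen for clarity.

```python
from collections import Counter

class StochasticExplanationCase:
    """Pairs an event with a probability distribution over plausible
    explanations (idea 1). A minimal sketch; all names are hypothetical."""

    def __init__(self, event, explanations):
        self.event = event
        # Start from a uniform prior over the plausible explanations.
        self.counts = Counter({e: 1 for e in explanations})

    def distribution(self):
        total = sum(self.counts.values())
        return {e: c / total for e, c in self.counts.items()}

    def reinforce(self, confirmed_explanation):
        # Idea 3: shift probability mass toward explanations that were
        # later confirmed, each time the case is reused.
        self.counts[confirmed_explanation] += 1


case_base = {}

def explain(event, plausible_explanations):
    # Idea 2: retain a new (event, stochastic explanation) case the
    # first time an unprecedented event occurs.
    if event not in case_base:
        case_base[event] = StochasticExplanationCase(event, plausible_explanations)
    return case_base[event].distribution()

# Example in an RTS-like setting: a unit takes damage, two causes compete.
dist = explain("unit_damaged", ["enemy_rush", "ranged_harass"])
case_base["unit_damaged"].reinforce("enemy_rush")  # cause confirmed later
```

Because the case keeps a full distribution rather than a single committed explanation, a run of confirmations for a previously unlikely cause quickly raises its probability, which is the mechanism behind the faster reaction to abrupt changes claimed above.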


Similar Articles

Evaluation of remote sensing indicators in drought monitoring using machine learning algorithms (Case study: Marivan city)

Remote sensing indices are used to analyze the spatio-temporal distribution of drought conditions and to identify drought severity. This study used various drought indices generated from MODIS and TRMM satellite data extracted from the Google Earth Engine (GEE) platform. Drought conditions in Marivan city from February to November for the years 2001 to 2017 were analyzed based on spatial a...


On the insufficiency of existing momentum schemes for Stochastic Optimization

Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov’s accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Rigorously speaking, “fast gradient” methods have provable improvements over gradient descent...
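For context, the heavy-ball (Polyak momentum) update that this entry discusses can be sketched as follows. This is a minimal illustration on a deterministic gradient, not the paper's experimental setup; in SGD the gradient would be a noisy minibatch estimate.

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.1, momentum=0.9, steps=300):
    """Heavy-ball momentum update:
        v_{t+1} = momentum * v_t - lr * grad(x_t)
        x_{t+1} = x_t + v_{t+1}
    Minimal sketch; function and parameter names are hypothetical."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = momentum * v - lr * grad(x)
        x = x + v
    return x

# Minimize f(x) = ||x||^2 / 2, whose gradient is simply x.
x_min = heavy_ball(lambda x: x, x0=[5.0, -3.0])
```

Nesterov's accelerated gradient (NAG) differs only in evaluating the gradient at the look-ahead point `x + momentum * v` rather than at `x`.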



Publication date: 2013